
    Adaptation to Delayed Force Perturbations in Reaching Movements

    Adaptation to deterministic force perturbations during reaching movements has been studied extensively over the last few decades. Here, we use this methodology to explore the ability of the brain to adapt to a delayed velocity-dependent force field. Two groups of subjects performed a standard reaching experiment under a velocity-dependent force field. The force was either immediately proportional to the current velocity (Control) or lagged it by 50 ms (Test). The results demonstrate clear adaptation to the delayed force perturbations. Deviations from a straight line during catch trials were shifted in time compared to post-adaptation to a non-delayed velocity-dependent field (Control), indicating expectation of the delayed force field. Adaptation to force fields is considered to be a process in which the motor system predicts the forces to be expected based on the state that a limb will assume in response to motor commands. This study demonstrates for the first time that the temporal window of this prediction need not be fixed. This is relevant to the ability of adaptive mechanisms to compensate for variability in the transmission of information across the sensorimotor system.
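A velocity-dependent field of the kind described above can be written as |F(t)| = B·v(t − Δ), with Δ = 0 in the Control condition and Δ = 50 ms in the Test condition. A minimal sketch follows; the field gain B and the minimum-jerk speed profile are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def min_jerk_speed(t, T=0.5, d=0.1):
    """Minimum-jerk speed profile (m/s) for a reach of duration T (s)
    and distance d (m); an assumed stand-in for the measured speed."""
    tau = np.clip(t / T, 0.0, 1.0)
    return d / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)

def viscous_force(t, B=13.0, delay=0.0):
    """Velocity-dependent force |F| = B * v(t - delay); B (N*s/m) is a
    typical curl-field gain, not the paper's exact value."""
    return B * min_jerk_speed(t - delay)

t = np.linspace(0.0, 0.6, 601)
f_now = viscous_force(t, delay=0.0)    # Control: force tracks current velocity
f_lag = viscous_force(t, delay=0.05)   # Test: force lags velocity by 50 ms

# The delayed field peaks 50 ms later than the non-delayed one.
print(t[np.argmax(f_lag)] - t[np.argmax(f_now)])  # about 0.05 s
```

The temporal shift of the peak force is what an adapted controller would have to predict in the Test condition.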

    Asymmetric interlimb transfer of concurrent adaptation to opposing dynamic forces

    Interlimb transfer of a novel dynamic force has been well documented. It has also been shown that unimanual adaptation to opposing novel environments is possible if they are associated with different workspaces. The main aim of this study was to test whether adaptation to opposing velocity-dependent viscous forces with one arm could improve the initial performance of the other arm. The study also examined whether this interlimb transfer occurred across an extrinsic (spatial) coordinative system or an intrinsic (joint-based) coordinative system. Subjects initially adapted to opposing viscous forces separated by target location. Our measure of performance was the correlation between the speed profile of each movement within a force condition and an ‘average’ trajectory within null-force conditions. Adaptation to the opposing forces was seen during initial acquisition, with a significantly improved coefficient in epoch eight compared to epoch one. We then tested interlimb transfer from the dominant to the non-dominant arm (D → ND) and vice versa (ND → D) across either an extrinsic or an intrinsic coordinative system. Interlimb transfer was seen only from the dominant to the non-dominant limb, across an intrinsic coordinative system. These results support previous studies involving adaptation to a single dynamic force, but also indicate that interlimb transfer of multiple opposing states is possible. This suggests that the information available at the level of representation allowing interlimb transfer can be more intricate than a general movement goal or a single perceived directional error.
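The performance measure, a correlation between each movement's speed profile and an 'average' null-field profile, can be sketched as follows; the toy bell-shaped profiles are assumptions for illustration only:

```python
import numpy as np

def speed_profile_correlation(trial_speed, reference_speed):
    """Pearson correlation between a trial's speed profile and the
    average null-field profile -- the performance measure described in
    the abstract (resampling and alignment details are assumed)."""
    trial = np.asarray(trial_speed, dtype=float)
    ref = np.asarray(reference_speed, dtype=float)
    return float(np.corrcoef(trial, ref)[0, 1])

# Toy profiles: a bell-shaped null-field reference and an early trial
# skewed by the viscous force (both are hypothetical shapes).
t = np.linspace(0.0, 1.0, 101)
reference = np.sin(np.pi * t)
perturbed = np.sin(np.pi * t**1.5)

r_early = speed_profile_correlation(perturbed, reference)
r_adapted = speed_profile_correlation(reference, reference)
print(r_early < r_adapted)  # adaptation should raise the coefficient
```

A coefficient approaching 1 over epochs is what the reported improvement from epoch one to epoch eight corresponds to.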

    Concurrent adaptation to opposing visual displacements during an alternating movement.

    It has been suggested that, during tasks in which subjects are exposed to a visual rotation of cursor feedback, alternating bimanual adaptation to opposing rotations is as rapid as unimanual adaptation to a single rotation (Bock et al. in Exp Brain Res 162:513–519, 2005). However, that experiment did not test strict alternation of the limbs but short alternate blocks of trials. We have therefore tested adaptation under alternate left/right hand movement with opposing rotations. It was clear that the left and right hand, within the alternating conditions, learnt to adapt to the opposing displacements at a similar rate, suggesting that two adaptive states were formed concurrently. We suggest that the separate limbs are used as contextual cues to switch between the relevant adaptive states. However, we found that during online correction the alternating conditions had a significantly slower rate of adaptation in comparison to the unimanual conditions. Control conditions indicate that the results are not directly due to the alternation between limbs or to the constant switching of vision between the two eyes. The negative interference may originate from the requirement to dissociate the visual information of these two alternating displacements to allow online control of the two arms.
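The rate difference reported here, slower adaptation in the alternating conditions during online correction, is the kind of effect typically quantified by fitting an exponential learning curve to trial-by-trial performance. A minimal sketch with purely illustrative rate constants:

```python
import numpy as np

def adaptation_curve(trial, rate, asymptote=1.0):
    """Exponential learning curve commonly fitted to adaptation data;
    the functional form and the rate values are illustrative
    assumptions, not fits to this study's data."""
    return asymptote * (1.0 - np.exp(-rate * trial))

trials = np.arange(60)
unimanual = adaptation_curve(trials, rate=0.15)    # faster single-rotation learning
alternating = adaptation_curve(trials, rate=0.06)  # slower under opposing rotations

# Early in learning, the alternating condition lags the unimanual one
# even though both approach the same asymptote.
print(alternating[10] < unimanual[10])
```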

    Stimulation of PPC affects the mapping between motion and force signals for stiffness perception but not motion control

    How motion and force signals are combined to assess an object’s stiffness is still unknown. Here, we provide evidence for the existence of a stiffness estimator in the human posterior parietal cortex (PPC). We showed previously that delaying force feedback with respect to motion when interacting with an object caused participants to underestimate its stiffness. We found that applying theta-burst transcranial magnetic stimulation (TMS) over the PPC, but not over the dorsal premotor cortex, enhances this effect without affecting movement control. We explain this enhancement as an additional lag in force signals. This is the first causal evidence that the PPC is not only involved in motion control, but also plays an important role in perception that is dissociated from action. We provide a computational model suggesting that the PPC integrates position and force signals for the perception of stiffness, and that TMS alters the synchronization between the two signals, causing lasting consequences for perceptual behavior.
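The underestimation effect can be illustrated with a simple regression view of stiffness perception: if perceived stiffness is the least-squares slope relating force to position, a pure delay in the force channel lowers the estimate. The parameters below (stiffness, probing frequency, 50 ms lag) are illustrative assumptions, not the paper's model:

```python
import numpy as np

def estimated_stiffness(k=100.0, delay=0.0, freq=1.0, duration=2.0):
    """Stiffness estimate as the least-squares slope of force on
    position when force feedback lags position by `delay` seconds.
    k (N/m) and the 1 Hz sinusoidal probe are assumed values."""
    t = np.linspace(0.0, duration, 2001)
    x = 0.05 * np.sin(2 * np.pi * freq * t)                 # probe position (m)
    f = k * 0.05 * np.sin(2 * np.pi * freq * (t - delay))   # delayed force (N)
    return float(np.sum(f * x) / np.sum(x * x))             # regression slope

k_veridical = estimated_stiffness(delay=0.0)
k_delayed = estimated_stiffness(delay=0.05)   # 50 ms force lag
print(k_delayed < k_veridical)  # lag -> stiffness underestimated
```

For sinusoidal probing, the slope shrinks by roughly cos(2πf·Δ), so any additional lag introduced by TMS over the PPC would deepen the underestimation, consistent with the reported enhancement.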

    Advances in Human-Robot Handshaking

    The use of social, anthropomorphic robots to support humans in various industries has been on the rise. During Human-Robot Interaction (HRI), physically interactive non-verbal behaviour is key for more natural interactions. Handshaking is one such natural interaction, used commonly in many social contexts. It is one of the first non-verbal interactions to take place and should, therefore, be part of the repertoire of a social robot. In this paper, we explore the existing state of Human-Robot Handshaking and discuss possible ways forward for such physically interactive behaviours. Comment: Accepted at the 12th International Conference on Social Robotics (ICSR 2020); 12 pages, 1 figure.

    Single Neurons in M1 and Premotor Cortex Directly Reflect Behavioral Interference

    Some motor tasks, if learned together, interfere with each other's consolidation and subsequent retention, whereas other tasks do not. Interfering tasks are said to employ the same internal model, whereas noninterfering tasks use different models. The division of function among internal models, as well as their possible neural substrates, is not well understood. To investigate these questions, we compared the responses of single cells in the primary motor cortex and premotor cortex of primates to interfering and noninterfering tasks. The interfering tasks were a visuomotor rotation followed by an opposing visuomotor rotation. The noninterfering tasks were a visuomotor rotation followed by an arbitrary association task. Learning two noninterfering tasks led to the simultaneous formation of neural activity typical of both tasks at the level of single neurons. In contrast, and in accordance with the behavioral results, after learning two interfering tasks only the second task was successfully reflected in motor cortical single-cell activity. These results support the hypothesis that the representational capacity of motor cortical cells is the basis of behavioral interference and of the division between internal models.

    A biologically inspired neural network controller for ballistic arm movements

    Background: In humans, the implementation of multijoint arm tasks implies a highly complex integration of sensory information, sensorimotor transformations and motor planning. Computational models can be profitably used to better understand the mechanisms subserving motor control, providing useful perspectives and allowing different control hypotheses to be investigated. To this purpose, the use of Artificial Neural Networks has been proposed to represent and interpret the movement of the upper limb. In this paper, a neural network approach to modelling the motor control of a human arm during planar ballistic movements is presented.
    Methods: The developed system is composed of three main computational blocks: 1) a parallel distributed learning scheme that aims at simulating the internal inverse model in the trajectory formation process; 2) a pulse generator, which is responsible for the creation of muscular synergies; and 3) a limb model based on two joints (two degrees of freedom) and six muscle-like actuators that can accommodate the biomechanical parameters of the arm. The learning paradigm of the neural controller is based on pure exploration of the working space with no feedback signal. The kinematics produced by the system have been compared with experimental human data from the literature.
    Results: The model reproduces the kinematics of arm movements, with bell-shaped wrist velocity profiles and approximately straight trajectories, and gives rise to the generation of synergies for the execution of movements. The model achieves amplitude and direction errors of 0.52 cm and 0.2 radians, respectively. Curvature values are similar to those encountered in experimental measurements with humans. The neural controller also manages environmental modifications such as the insertion of different force fields acting on the end-effector.
    Conclusion: The proposed system has been shown to properly simulate the development of internal models and to control the generation and execution of ballistic planar arm movements. Since the neural controller learns to manage movements on the basis of kinematic information and arm characteristics, it could in perspective command a neuroprosthesis instead of a biomechanical model of a human upper limb, and could thus give rise to novel rehabilitation techniques.
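The two-joint limb geometry and the reported amplitude/direction error measures can be sketched as follows; the link lengths, joint angles, and perturbations are illustrative assumptions, not the model's parameters:

```python
import numpy as np

def forward_kinematics(q1, q2, l1=0.30, l2=0.33):
    """End-point position of a planar two-joint arm (shoulder angle q1,
    elbow angle q2, in radians). Link lengths (m) are typical human
    values assumed for illustration."""
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.array([x, y])

def reach_errors(target, endpoint):
    """Amplitude (radial distance) and direction (angular) errors of
    the kind reported for the controller (0.52 cm, 0.2 rad)."""
    amp_err = abs(np.linalg.norm(endpoint) - np.linalg.norm(target))
    dir_err = abs(np.arctan2(endpoint[1], endpoint[0])
                  - np.arctan2(target[1], target[0]))
    return amp_err, dir_err

target = forward_kinematics(np.pi / 4, np.pi / 3)
achieved = forward_kinematics(np.pi / 4 + 0.02, np.pi / 3 - 0.01)  # small joint errors
amp, direction = reach_errors(target, achieved)
print(amp, direction)
```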

    Shaping Embodied Neural Networks for Adaptive Goal-directed Behavior

    Learning and memory are thought to emerge from modifications of the synaptic connections between neurons, as guided by sensory feedback during behavior. However, much is unknown about how such synaptic processes sculpt, and are sculpted by, neuronal population dynamics and interaction with the environment. Here, we embodied a simulated network, inspired by dissociated cortical neuronal cultures, with an artificial animal (an animat) through a sensory-motor loop consisting of structured stimuli, detailed activity metrics incorporating spatial information, and an adaptive training algorithm that takes advantage of spike-timing-dependent plasticity. Using this design, we demonstrated that the network was capable of learning associations between multiple sensory inputs and motor outputs, and that the animat was able to adapt to a new sensory mapping to restore its goal behavior: move toward and stay within a user-defined area. We further showed that successful learning required proper selection of the stimuli encoding sensory inputs and a variety of training stimuli, with adaptive selection contingent on the animat's behavior. We also found that an individual network had the flexibility to achieve different multi-task goals, and that the same goal behavior could be exhibited with different sets of network synaptic strengths. While lacking the characteristic layered structure of in vivo cortical tissue, the biologically inspired simulated networks could tune their activity in behaviorally relevant ways, demonstrating that leaky integrate-and-fire neural networks have an innate ability to process information. This closed-loop hybrid system is a useful tool for studying the network properties that mediate between synaptic plasticity and behavioral adaptation. The training algorithm provides a stepping stone towards designing future control systems, whether with artificial neural networks or with biological animats themselves.
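The plasticity rule the training algorithm exploits, pair-based spike-timing-dependent plasticity, can be sketched as follows; the learning rates and time constant are common textbook values, not the parameters used in this study:

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=0.020):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike (dt = t_post - t_pre > 0), depress
    otherwise. Constants are illustrative textbook values."""
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)   # pre before post: LTP
    else:
        w -= a_minus * math.exp(dt / tau)   # post before pre: LTD
    return max(0.0, min(1.0, w))            # keep the weight bounded

w = 0.5
w_ltp = stdp_update(w, dt=0.005)    # pre leads post by 5 ms: strengthen
w_ltd = stdp_update(w, dt=-0.005)   # post leads pre by 5 ms: weaken
print(w_ltp > 0.5 > w_ltd)
```

In the closed loop described above, training stimuli are chosen so that such timing-dependent updates push the network's sensory-motor mapping toward the goal behavior.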

    Adaptive tuning functions arise from visual observation of past movement

    Visual observation of movement plays a key role in action. For example, tennis players have little time to react to the ball but still need to prepare the appropriate stroke; it might therefore be useful to use visual information about the ball's trajectory to recall a specific motor memory. Past visual observation of movement (as well as passive and active arm movement) affects the learning and recall of motor memories. Moreover, whether passive or active, these past contextual movements exhibit generalization (or tuning) across movement directions. Here we extend this work, examining whether visual motion also exhibits similar generalization across movement directions and whether such generalization functions can explain patterns of interference. Both the adaptation movement and the contextual movement exhibited generalization beyond the training direction, with the visual contextual motion exhibiting much broader tuning. A second experiment demonstrated that this pattern was consistent with the results of an interference experiment in which opposing force fields were associated with two separate visual movements. Overall, our study shows that visual contextual motion exhibits much broader (and shallower) tuning functions than previously seen for either passive or active movements, demonstrating that the tuning characteristics of past motion are highly dependent on its sensory modality.
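Generalization (tuning) functions of this kind are often modelled as Gaussians centred on the trained direction; the broader, shallower visual tuning reported here then corresponds to a larger width and smaller gain. A sketch under those assumptions, with purely illustrative parameter values:

```python
import numpy as np

def generalization(angle_deg, gain, width_deg):
    """Gaussian tuning of adaptation recall around the trained
    direction (angle 0). Gain and width are illustrative, chosen so
    the visual context is broader and shallower, as reported."""
    return gain * np.exp(-0.5 * (angle_deg / width_deg) ** 2)

angles = np.linspace(-180.0, 180.0, 361)
passive = generalization(angles, gain=1.0, width_deg=30.0)  # narrow, deep
visual = generalization(angles, gain=0.5, width_deg=90.0)   # broad, shallow

# Broader tuning: at 90 degrees from training, the visual context
# retains a larger fraction of its peak recall than the passive one.
print(visual[angles == 90][0] / visual.max(),
      passive[angles == 90][0] / passive.max())
```

Overlapping broad tuning curves of this sort are also what would produce the interference pattern seen when opposing force fields are linked to two separate visual movements.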